Modelling of sediment transport and bed deformation in rivers with continuous bends
Peer reviewed Postprint
Numerical Modeling of Flood Control System in the Middle Yangtze River by Coupled Hydrological-Hydrodynamic Approach
Source: ICHE Conference Archive - https://mdi-de.baw.de/icheArchiv
Water-Sediment Regimes and River Health
Source: ICHE Conference Archive - https://mdi-de.baw.de/icheArchiv
Exploration of Problems and Key Points in Database Design in Software Development
Starting from the necessity and principles of database design, this article explores design and optimization issues. First, it analyzes why database design is necessary, covering effective data management, maintainability, resource utilization, and running speed. It then discusses a series of issues in database management, such as user management, data-object design specifications, and overall design approach. Finally, it elaborates on optimization topics including normalization rules, handling of inter-table redundancy, query optimization, indexing, and transactions. Database design is indispensable in the software development lifecycle: it ensures not only the safety and reliability of data but also the stability and speed of the system as a whole. Strengthening the rationality and optimization of the design is key to improving software quality.
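The optimization topics the abstract lists can be illustrated concretely. A minimal sketch (not from the article; the table and column names are invented for the example) showing a normalized two-table schema, an index on the foreign key, and an explicit transaction, using Python's built-in sqlite3:

```python
# Illustrative sketch only: a normalized schema, an index, and a transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalization: users and orders live in separate tables linked by a key,
# avoiding the inter-table redundancy the article warns about.
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "user_id INTEGER REFERENCES users(id), amount REAL)")

# An index on the foreign key speeds up the per-user lookups below.
cur.execute("CREATE INDEX idx_orders_user ON orders(user_id)")

# Grouping related writes in one transaction makes them commit or fail together.
with conn:
    cur.execute("INSERT INTO users VALUES (1, 'alice')")
    cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(1, 1, 9.5), (2, 1, 20.0)])

cur.execute("SELECT SUM(amount) FROM orders WHERE user_id = ?", (1,))
total = cur.fetchone()[0]
print(total)  # 29.5
```

The `with conn:` block wraps the inserts in a single transaction, one of the mechanisms the abstract cites for keeping data safe and consistent.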
GP-NAS-ensemble: a model for NAS Performance Prediction
It is of great significance to estimate the performance of a given model
architecture without training in the application of Neural Architecture Search
(NAS) as it may take a lot of time to evaluate the performance of an
architecture. In this paper, a novel NAS framework called GP-NAS-ensemble is
proposed to predict the performance of a neural network architecture with a
small training dataset. We make several improvements on the GP-NAS model to
make it share the advantage of ensemble learning methods. Our method ranks
second in the CVPR2022 second lightweight NAS challenge performance prediction
track.
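The core ensembling idea the abstract describes can be sketched in a few lines. This is a hedged illustration, not the authors' code: the three "base predictors" below are toy linear functions standing in for trained GP-NAS variants, and their averaged output is the ensemble's performance estimate for a candidate architecture.

```python
# Toy sketch of ensemble performance prediction (not the GP-NAS-ensemble code).
def make_linear_predictor(weights, bias):
    """Return a predictor mapping an architecture feature vector to a score."""
    def predict(features):
        return sum(w * f for w, f in zip(weights, features)) + bias
    return predict

# Three hypothetical base predictors (stand-ins for trained GP-NAS variants).
ensemble = [
    make_linear_predictor([0.5, 0.2], 0.1),
    make_linear_predictor([0.4, 0.3], 0.0),
    make_linear_predictor([0.6, 0.1], 0.2),
]

def ensemble_predict(features):
    """Average the base predictors' scores to get the ensemble estimate."""
    scores = [p(features) for p in ensemble]
    return sum(scores) / len(scores)

arch_features = [1.0, 2.0]  # e.g. encoded depth and width of a candidate net
print(ensemble_predict(arch_features))  # 1.0
```

Averaging several imperfect predictors reduces the variance of the estimate, which is the advantage of ensemble learning the abstract says GP-NAS-ensemble inherits.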
Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization
Counterfactually-Augmented Data (CAD) -- minimal editing of sentences to flip
the corresponding labels -- has the potential to improve the
Out-Of-Distribution (OOD) generalization capability of language models, as CAD
induces language models to exploit domain-independent causal features and
exclude spurious correlations. However, the empirical results of CAD's OOD
generalization are not as efficient as anticipated. In this study, we attribute
the inefficiency to the myopia phenomenon caused by CAD: language models only
focus on causal features that are edited in the augmentation operation and
exclude other non-edited causal features. Therefore, the potential of CAD is
not fully exploited. To address this issue, we analyze the myopia phenomenon in
feature space from the perspective of Fisher's Linear Discriminant, then we
introduce two additional constraints based on CAD's structural properties
(dataset-level and sentence-level) to help language models extract more
complete causal features in CAD, thereby mitigating the myopia phenomenon and
improving OOD generalization capability. We evaluate our method on two tasks:
Sentiment Analysis and Natural Language Inference, and the experimental results
demonstrate that our method could unlock the potential of CAD and improve the
OOD generalization performance of language models by 1.0% to 5.9%.Comment: Expert Systems With Applications 2023. arXiv admin note: text overlap
with arXiv:2302.0934
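The Fisher's Linear Discriminant perspective the abstract invokes can be made concrete. A minimal sketch (illustrative only, not the paper's method) computes the one-dimensional Fisher criterion for two features: a "causal" feature that separates the labels well scores much higher than a "spurious" one.

```python
# Illustrative only: the 1-D Fisher criterion behind the paper's analysis.
def fisher_ratio(class_a, class_b):
    """(between-class separation)^2 / within-class scatter for 1-D features."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    var_a = sum((x - mean_a) ** 2 for x in class_a) / len(class_a)
    var_b = sum((x - mean_b) ** 2 for x in class_b) / len(class_b)
    return (mean_a - mean_b) ** 2 / (var_a + var_b)

# A well-separated (causal) feature scores far higher than a spurious one.
causal = fisher_ratio([0.9, 1.0, 1.1], [-1.1, -1.0, -0.9])
spurious = fisher_ratio([0.1, -0.1, 0.2], [0.0, 0.15, -0.05])
print(causal > spurious)  # True
```

In the paper's terms, the myopia phenomenon means models raise this ratio only for the edited causal features; the proposed constraints push them to do so for the non-edited causal features as well.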
Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding
Chain-of-Thought (CoT) is a technique that guides Large Language Models
(LLMs) to decompose complex tasks into multi-step reasoning through
intermediate steps in natural language form. Briefly, CoT enables LLMs to think
step by step. However, although many Natural Language Understanding (NLU) tasks
also require thinking step by step, LLMs perform less well than small-scale
Masked Language Models (MLMs). To migrate CoT from LLMs to MLMs, we propose
Chain-of-Thought Tuning (CoTT), a two-step reasoning framework based on prompt
tuning, to implement step-by-step thinking for MLMs on NLU tasks. From the
perspective of CoT, CoTT's two-step framework enables MLMs to implement task
decomposition; CoTT's prompt tuning allows intermediate steps to be used in
natural language form. Thereby, the success of CoT can be extended to NLU tasks
through MLMs. To verify the effectiveness of CoTT, we conduct experiments on
two NLU tasks: hierarchical classification and relation extraction, and the
results show that CoTT outperforms baselines and achieves state-of-the-art
performance.
Comment: EMNLP2023 Main Conference
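The two-step structure described above can be sketched schematically. This is a toy stand-in, not the authors' prompt-tuned MLM: `toy_mlm` is a hand-written lookup, and the example only shows the control flow in which step 1 produces an intermediate step in natural-language form and step 2 conditions the final prediction on it.

```python
# Schematic sketch of two-step reasoning (the real CoTT uses prompt-tuned MLMs).
def toy_mlm(prompt_with_mask):
    """Stand-in for a masked LM: fill [MASK] from a fixed lookup."""
    lookup = {
        "The entity in 'Paris is lovely' is [MASK].": "Paris",
        "Paris is a [MASK].": "city",
    }
    return lookup[prompt_with_mask]

def two_step(sentence):
    # Step 1: produce an intermediate step as natural-language text.
    entity = toy_mlm(f"The entity in '{sentence}' is [MASK].")
    # Step 2: condition the final prediction on the intermediate text.
    label = toy_mlm(f"{entity} is a [MASK].")
    return entity, label

print(two_step("Paris is lovely"))  # ('Paris', 'city')
```

Keeping the intermediate step in text form, rather than as a hidden vector, is what the abstract credits for carrying CoT's task-decomposition benefit over to MLMs.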